
Comment Re:Chomsky (Score 1) 58

> an innate ability for language

His theory is pretty good descriptively but there's a South American tribe that speaks in a way differently than what his insistence on specific biological structure supports.

You mean the Pirahã. That’s Daniel Everett’s claim from the mid-2000s, not a new discovery. Even then, it wasn’t that Pirahã disproves Universal Grammar — only that it appears to restrict certain recursive constructions. Restriction is not falsification. Languages vary in what they use, not in what the human brain can generate, which is what this paper addresses. You did read it, right? Your four-digit uid suggests you've been around long enough to be as tired of drive-by snark as I am.

The precept that language is innate vs. how language works being innate are probably different claims.

That’s a category error. Chomsky never argued for a hard-wired grammar of English; his point was that the capacity for hierarchical, recursive syntax is part of our biological endowment. Just name-dropping Pirahã isn’t an argument.

Academic linguists of the Expert Class type get super mad when people bring up that tribe.

They don’t get mad; they get tired of hearing the same misapplied talking point 20 years later. The Pirahã case has been examined in detail, and Everett’s strongest claims have been challenged and refuted in peer-reviewed work. This isn't the mic-drop moment you think it is: you are recycling decades-old culture-war fodder, not engaging with current evidence.

IMO it's better to be a scientist than an acclaimed Expert.

That’s posturing, not argument. Science advances by careful data, replication, and theoretical refinement. Dismissing those who’ve actually done the work as “Experts” isn’t skepticism — it’s contrarian cosplay.

Comment Re:Chomsky (Score 1) 58

This is super cool, but I wouldn't think it gives Chomsky the win yet about there being an innate ability for language

Agreed — on both points. :) Cool indeed. This paper doesn’t hand Chomsky the trophy. What it does suggest is that his “universals of language” may live in more than one layer of the system. Chomsky gave us a handle on the structure layer, but this paper demonstrates that there is also a temporal (read: prosodic) layer built on the physical substrate of breathing rhythms, neural oscillations, and motor constraints. These impose measurable limits on how fast syllables can be produced and how long prosodic chunks can be held.

The PNAS paper shows that intonation units (IUs) universally cluster around ~0.6 Hz — about one every 1.6 seconds — regardless of language family. That’s a physical invariant, and that is a remarkable finding: speech is chunked into units of time that facilitate memory, turn-taking, and information pacing. This is the “universal timing” finding — a rhythmic lattice that seems to hold across cultures and demographics.

In a bucket, physics constrains prosody, prosody scaffolds syntax, syntax enables cognition and meaning. That’s not a knockout for UG, but it does mean universals are real — they just come from multiple strata of the system.
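For anyone who wants the timing claim spelled out: the quoted rate and period are two views of the same number (rate and period as reported above; the arithmetic is mine, and the comment's "1.6 seconds" is just rounding):

```latex
f_{\mathrm{IU}} \approx 0.6~\mathrm{Hz}
\qquad\Longleftrightarrow\qquad
T_{\mathrm{IU}} = \frac{1}{f_{\mathrm{IU}}} \approx \frac{1}{0.6~\mathrm{Hz}} \approx 1.67~\mathrm{s}
```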

Chomsky’s theory — that humans uniquely combine words into hierarchical, recursive structures, rules that no other species exhibits at the same depth — is still viable, but it would seem to be incomplete. He was on the right track: UG was his attempt to argue for a higher-dimensional blueprint in our cognitive architecture. This paper simply shows that there is a temporal layer as well, one level down — in the rhythm of spoken interaction itself. Chomsky’s model gave us a handle on structure; this research gives us a handle on time. If anything, it suggests UG is incomplete, not wrong: syntax is one invariant, timing is another. Both are necessary if you want cognition and language to emerge in human form.

Comment Re: the key word - "WAS" (Score 5, Insightful) 103

Modular nuclear makes way more sense than solar. Easier to maintain and much smaller footprint.

Arguing for SMRs as a present-day solution isn't common sense; the data shows the opposite. According to FERC, the three-year energy pipeline contains 113 GW of new renewables and zero nuclear. While it's true NuScale's SMR design was certified by the NRC—a major achievement—that doesn't mean they're viable. Their flagship project, the Carbon Free Power Project in Idaho, was canceled due to soaring costs before they even broke ground. The most advanced SMR in the country failed because the economics didn't work, while we're deploying over a gigawatt of solar every month.

You mentioned land use and maintenance. Yes, a nuclear plant's footprint is small, but that ignores the massive lifecycle footprint of uranium mining and permanent waste storage. And the idea that a fission reactor is "easier to maintain" than a solar panel is simply not a serious argument. Maintaining a solar farm involves checking inverters and cleaning panels. Maintaining a wind turbine involves servicing a gearbox. These are standard industrial tasks. Maintaining a nuclear reactor, otoh, is one of the most complex, expensive, and heavily regulated maintenance jobs on the planet, requiring elite specialists, massive security, and the handling of radioactive materials.

In a bucket, utility-scale solar and wind are the cheapest forms of new energy generation in history. As the FERC data shows, we are installing over a gigawatt of solar every month, while SMRs like the CFPP are failing before a shovel even hits the dirt. Don't misconstrue me here -- I think SMRs have a future role, but right now solar and wind are the clear winners in real-world deployment economics, and nuclear power, even promising tech like SMRs, still has real-world problems to overcome before it can be economically deployed at scale.

Stop getting angry at politicians and listen to common sense

Politics is the entire reason this is a debate. Do you really think Trump or his MAGA enablers in Congress would give a shit about energy economics if Obama's and Biden's names weren't attached to prominent, successful examples? The success of renewables is a direct result of policies like the Obama-era recovery investments in clean energy and Biden's ITC extensions of Obama's renewable energy tax breaks. The current administration's "Big Beautiful Bill" is a politically motivated attempt to kneecap that progress on behalf of fossil fuel interests. Pretending policy doesn't matter is disingenuous, especially in the Trump era, where successful lobbying means figuring out which of Trump's emotional levers is easiest to pull.

Comment Obama-era energy policy ftw; Trump can't let it go (Score 0) 103

Solar keeps on winning. The data is undeniable: Solar has now been the largest single source of new U.S. generating capacity for 21 consecutive months. Through the first five months of this year, solar (11,518 MW) and wind (2,379 MW) accounted for 91% of new capacity. Installed capacity tells the same story: renewables are closing in on one-third of the U.S. total. No wonder the fossil fuel industry is in panic mode.
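For the spreadsheet-inclined, those figures hang together. A quick back-of-envelope check (numbers are the ones quoted above, in MW; the 91% share is the claim being tested):

```python
# Back-of-envelope check on the capacity figures cited above (MW).
solar, wind = 11_518, 2_379
combined = solar + wind                  # 13,897 MW of new solar + wind
implied_total = combined / 0.91          # if that is 91% of all new capacity
print(f"solar + wind:      {combined:,} MW")
print(f"implied total new: {implied_total:,.0f} MW")   # ~15,271 MW
```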

This momentum didn’t happen by chance. It's the direct result of a decade of forward-looking policy: Obama’s 2009 recovery investments, state-level renewable standards, and critically, the long-term tax credits extended by Biden's 2022 Inflation Reduction Act (IRA). These policies created market certainty, driving down costs and making solar plus storage cheaper than natural gas in much of the country.

This is precisely the progress Trump and his allies now seek to dismantle. His "Big, Beautiful Bill" is a transparent gift to the fossil fuel lobby. Its provisions specifically target the IRA's clean energy tax credits (sections 45Y and 48E), aiming to repeal the very incentives that give developers the stability to build these multi-year projects. The goal is to choke off investment in renewables to protect the profits of his fossil fuel donors, even if it costs American jobs and our energy independence.

The numbers show where the future is. Coal and nuclear are shrinking, and FERC projects another 226 GW of solar in the pipeline over the next three years — versus just 29 GW of new gas. At this rate, renewables will pass natural gas as our largest source of capacity before the end of the decade.

Policy, economics, and physics are all aligned. The clean energy transition is the month-by-month reality of America’s grid. The only thing standing in the way is a reactionary political agenda willing to sacrifice that progress for short-term political points and campaign cash.

Comment Re:Reasoning (Score 1) 139

Because LLMS do not reason. They regurgitate information in a pleasing way. There are no thought processes or consciousness. It's finding patterns in data and spitting them out. If it does anything, it's because someone asked it to do something. If you don't want someone using it for nefarious purposes, don't let people ask it to do nefarious things.

Nope. You are smuggling in a definition of “consciousness” and I'll assume you don't even realize it — largely because you are in good company. It's the same error Descartes made. The Cogito (“I think, therefore I am”) only works if you already assume you’re conscious in the first place. That’s circular reasoning dressed up as certainty. LLMs don’t “prove” they’re conscious any more than Descartes did, but ruling it out by fiat the way you just did isn’t much better. If you start with the axiom “pattern recognition isn’t thought,” you’ll never see thought in pattern recognition, even if it’s staring you in the face. The real debate isn’t whether today’s models are conscious — it’s whether our definitions of reasoning and consciousness are robust enough to survive contact with systems that don’t share our evolutionary wiring. Nice try, though. Thanks for playing.

Comment Re:Nonsequitur (Score 1) 139

Give it a rest, dude. You are so far off base it isn't even funny anymore. I can't tell if you are a failed philosophy undergrad, a failed math undergrad, a failed CS undergrad -- or some toxic combination of all three.

Did you just regurgitate how Descartes influenced modern science to think about animals for centuries?

Maybe!

Are you contending that tokenizing and cramming a bunch of words from books, newspapers, and chatlogs into a group of tensors leads to consciousness?

Straw man and category error. No one says “tokenization = consciousness.” Modern LLMs aren’t a bag-of-words scrapbook; they learn high-dimensional representations with systematic structure (syntax, semantics, causal cues, program traces). That’s why they transfer across domains, perform multi-step reasoning, and exhibit planning-like behavior under constraints. You can call it “just pattern matching,” but that’s what our wetware does too: hierarchical prediction on streams of input. Dismissing it as such is not the mic-drop you think it is.

The actual claim is that sufficiently rich learning systems can implement functional organizations that realize reasoning-like capacities. “Group of tensors” is a math description, not a dismissal; by your logic, “electro-chemical spikes in a lump of fatty tissue” couldn’t yield consciousness either. Yet here we are, implemented in wetware. Implementation details differ; functional organization is the point. And if you really want to wave tensors away, be prepared to grapple with models of emergent consciousness as manifolds or surfaces in Hilbert space, where the geometry of those tensor representations is precisely the thing under debate. If all you can do is sneer at the data structures, you’re not discrediting AI research; you’re just advertising that you’ve tapped out of the math.

Because that isn’t anyone’s idea of how consciousness works.

Argument from popularity, and false to fact. Whole traditions say otherwise: computational functionalism (Putnam, Fodor), global workspace/ignition models (Baars, Dehaene), Integrated Information Theory (Tononi), predictive processing (Friston, Clark), illusionism (Dennett), and contemporary neuroscience (Seth’s controlled hallucination) all frame mind as processes realized by information-processing systems. You can disagree, but “no one” is a tell that you haven’t done the reading. I’ll wait while you look up “multiple realizability” and “emergent behavior.”

Comment Clickbait from Vice; the Quanta article is solid. (Score 2) 139

Yeesh. Vice really leaned into the “AI plotting behind our backs” clickbait here. The headline alone — “AI Is Talking Behind Our Backs About Glue-Eating and Killing Us All” — tells you everything about the editorial angle. Yes, the paper reports that a model fine-tuned on certain datasets will sometimes cough up bizarre or violent outputs, but Vice frames it like we’ve got Skynet sending coded messages to its buddies. That’s not what’s happening.

By contrast, Quanta did what they usually do: longer piece, slower pace, actual experts weighing in. They still used the word “evil” (because the researchers themselves use it as shorthand for “misaligned outputs”), but they explained the mechanics: fine-tuning on seemingly harmless insecure code or number sequences can cause a model to inherit unwanted traits from a “teacher” model, even when the training data has been aggressively filtered. Quanta also pointed out the probabilistic nature — we’re talking single-digit percentages of “bad” answers, not runaway self-awareness.

And the paper itself? Worth taking seriously, but not in a science-fiction way. The authors call it subliminal learning: when you distill one model into another, hidden traits (biases, misalignment) can transfer even through innocuous-looking data. It’s not just GIGO; it’s more like a supply-chain vulnerability in model training. If you train on model-generated data, you can inherit traits you never intended. That’s the alignment lesson here — subtle, technical, and important — without needing to invoke glue-eating robo-overlords.
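To make the mechanism concrete, here's a toy sketch of the distillation pathway. This is not the paper's setup (they study LLMs, not linear models); every name and number below is illustrative. The point it demonstrates: a student fit on a teacher's outputs over innocuous-looking data inherits the teacher's hidden quirk, because the quirk is baked into every output.

```python
# Toy illustration of trait transfer through distillation (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# "Teacher": a linear map whose last weight is the hidden, unwanted trait.
w_teacher = np.array([1.0, -2.0, 5.0])

# Innocuous-looking distillation data: random inputs, teacher-generated labels.
X = rng.normal(size=(1000, 3))
y = X @ w_teacher            # filtering these numbers can't reveal the trait

# "Student": ordinary least-squares fit on the teacher's outputs.
w_student, *_ = np.linalg.lstsq(X, y, rcond=None)

print(w_student)             # ~[ 1. -2.  5. ] -- the trait transferred intact
```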

Comment Re:Easy countermeasure (Score 1) 47

One cheap layer of aluminum foil. But the respective US authorities are stupid when it comes to supply-chain security. They are doing a lot more damage than good.

Wow... things must be getting slow at the wumao shop if your paymaster green-lit this gem — “foil beats federal export-control tradecraft” is some straight-to-video spy-movie nonsense. That's like believing a tinfoil hat will stop a JDAM.

The idea that a layer of Reynolds Wrap turns a live export-control investigation into a Scooby-Doo caper misunderstands how supply-chain interdiction works. These trackers aren’t tossed in like Cracker Jack prizes; they’re deployed with warrants, layered telemetry, and often redundant covert placement inside multiple components. Even if you blocked the RF channel (and good luck doing that without tearing apart high-value gear under a microscope), you’d still be leaving digital and transactional breadcrumbs all over the global logistics network. In real tradecraft, enforcement is about correlation: shipping manifests, customs filings, routing patterns, GPS, mesh hops, even handshake failures when a tracker goes dark. Wrap it in foil and you don’t make the package invisible — you just change the plot from “no signal” to “suspiciously went dark mid-route,” which is catnip for investigators. That’s how they build cases that hold up in court. It’s not about stopping your package mid-ocean; it’s about proving, with receipts, where it went, who touched it, and who’s lying about it later.
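For flavor, here's a minimal sketch of the "went dark mid-route" heuristic; all timestamps and the 12-hour threshold are invented for illustration:

```python
# Hedged sketch: flag a shipment whose tracker goes quiet mid-route.
from datetime import datetime, timedelta

pings = [
    datetime(2025, 1, 1, 0, 0),
    datetime(2025, 1, 1, 6, 0),
    datetime(2025, 1, 1, 12, 0),
    datetime(2025, 1, 3, 18, 0),   # long silence before this ping
]

MAX_GAP = timedelta(hours=12)

for earlier, later in zip(pings, pings[1:]):
    gap = later - earlier
    if gap > MAX_GAP:
        print(f"ALERT: tracker dark for {gap}, starting {earlier}")
```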

Comment ignore this troll (was Re:What incredible bullshit (Score 1) 40

Every time this particular troll pollutes a technical discussion thread, the choreography is the same: they start with a safe, uncontroversial statement, pivot to a strawman caricature of the actual topic, then swing away at the cartoon version while ignoring the real details. It’s lazy, it’s predictable, and it’s exactly what’s on display here.

Quantum sensors are a promising field.

True. And this experiment — near-ground-state cooling of a mechanical oscillator at room temperature — is the kind of enabling result that moves that field forward. High-purity quantum states aren’t academic baubles; they’re feedstock for quantum sensors, transducers, and hybrid devices.

Quantum computers have been crap, are crap and will remain crap.

Quantum error correction has already crossed the threshold where increasing code size reduces logical error rates. That’s the only stepping stone that matters for scalability, and it’s been reached. Your “will remain crap” whine is not a forecast — it’s the antics of a child who wandered into an adult conversation about a technical process he doesn't understand and started shouting nonsense until the adults noticed him. Well -- congrats, you have been noticed. Now go back to the nursery where you belong.
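For the curious, that below-threshold behavior is easy to see in a toy calculation. The standard heuristic is p_L ~ A(p/p_th)^((d+1)/2) for code distance d; the constants here are illustrative, not from any specific device:

```python
# Toy below-threshold scaling for a distance-d code (constants illustrative).
def logical_error_rate(p_phys, d, p_th=0.01, A=0.1):
    """Heuristic logical error rate: shrinks as code distance d grows."""
    return A * (p_phys / p_th) ** ((d + 1) / 2)

for d in (3, 5, 7, 9):
    print(f"d={d}: p_L ~ {logical_error_rate(0.003, d):.1e}")
# Below threshold (p < p_th), each step up in distance suppresses errors further.
```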

Apparently, today, the actual real world record for factorization with a QC is 35. Such impressive power!

Which is both factually true and strategically meaningless. The clean Shor factoring record is a tiny number because factoring isn’t the yardstick anymore. Hybrid demos have factored larger numbers, but the real milestones are verified quantum advantage tasks, scalable error-corrected logic, and domain-specific algorithms with practical speedups. Those are happening — whether or not they fit your chosen strawman. Seriously, clinging to factoring as a QC benchmark exposes your actual ignorance. Factoring stopped being a litmus test around 2018, once post-quantum cryptography (hello? lattice-based key exchange mean anything to you?) entered serious deployment pipelines and NIST started standardizing quantum-resistant algorithms. The field moved on to error-corrected logical qubits, quantum advantage in simulation and optimization, and scalable gate fidelity. You’re trying to grade QC on a nearly decade-old homework assignment that you still haven't got right.

If your goal was to take down a cartoon version of quantum computing, congratulations — you’ve just won an argument with yourself. And by that logic, the Wright Flyer’s 120-foot hop in 1903 proved heavier-than-air flight was doomed. Give us all a break and just go away.

Comment Quantum ground state at ambient? This is a win. (Score 1) 40

This is one of those papers that hits the trifecta: elegant physics, non-obvious engineering, and a mountain of reasons to be skeptical of the hype train.

Researchers at ETH Zurich have successfully cooled a mechanical oscillator—specifically, the librational mode of an optically levitated nanoparticle—to its quantum ground state at room temperature. No dilution refrigerators, no liquid helium, just vacuum, lasers, and ruthless noise suppression. The result? A phonon occupation of 0.04 and quantum state purity of 92%. Most ground-state prep requires cryogenic cooling plus microwave cavities or superconducting devices. This team pulled it off with a silica nanocluster levitated in a vacuum chamber, then cooled by using carefully tuned laser light inside an ultra-precise mirrored cavity that strips away the particle’s motion one quantum at a time.

But before you sell your dilution fridge futures, a few caveats. This was done at 5×10 mbar -- ultrahigh vacuum doesn’t come cheap, and it doesn't scale easily. It also required a phase-noise eater composed of Mach-Zehnder interferometers, electro-optic modulators, feedback loops, and zero tolerance for misalignment. The trap also demands constant laser power and precise nanoparticle alignment. Basically, they figured out how to hold a single snowflake with a pair of microtweezers in the middle of a hurricane, during an earthquake, without breaking it. This is why physics is fun... :)

So, does it scale? Right now? Not without a truckload of funding and a vibration-free bunker. But zoom out a bit -- to LEO or the moon -- and the off-Earth use cases actually make more sense than you’d expect. If the national vision for a permanent lunar or orbital presence is serious, these environments could be the ideal place to scale this technique. The moon offers passive cooling...ditto LEO, where the pristine vacuum and microgravity eliminate the primary sources of noise that require heroic engineering on Earth. This assumes, of course, that long-term federal support for fundamental research remains a priority, as it is precisely this kind of work that extends a nation's technological lead.

Would a lunar or orbital quantum sensor array based on this tech be cheaper or easier than just building better dilution fridges on Earth? Probably not yet. But this paper is less about immediate applications and more about removing a long-assumed bottleneck: that cryogenics are a hard requirement for high-purity quantum state prep. It’s not quantum supremacy, but it is a definite shift in the Overton window for what quantum hardware can look like in 10–20 years—especially for sensors, hybrid memory elements, and fundamental physics experiments in gravity + quantum interaction.

Comment Re:Used to be superconductors. (Score 1) 40

Now it's quantum states. Can you say vaporware?

Sure, I can say “vaporware.” I just wouldn’t apply it to a published, peer-reviewed experiment with real-world measurements and downloadable data.

Vaporware refers to a product that's marketed but never ships. This is fundamental research, not product development. What ETH Zurich demonstrated isn’t a vague promise—it’s a mechanical oscillator cooled to near its quantum ground state (state purity of 92%) at room temperature, an achievement many thought required costly cryogenics. That’s not a press release; it's physics, measured, calibrated, and published in a top-tier journal, Nature Physics.

You’re not wrong to be skeptical of hype—we should be cautious about every advance. But this isn't hype. Conflating foundational scientific results with a product roadmap is the same lazy cynicism behind the "fusion is always 10 years away" comments that plague every real breakthrough. This experiment worked.

No one is claiming this will be on sale next year. But by establishing a platform for high-purity quantum states at room temperature, it opens the door to developing new quantum sensors and testing the relationship between gravity and quantum mechanics. That's how actual progress happens. Quietly. In a lab. Long before there's anything to sell.

So, by all means, stay skeptical of product announcements. But let’s not pretend hard-won scientific results are vaporware just because they haven’t been shrink-wrapped yet.

Comment Re:What is motionless here? (Score 1) 40

Regardless of this accomplishment, whether it's real or not (I'm clearly not knowledgeable enough to make any claim on that), I wonder what "motionless" means. I usually think motion is always relative, so I assume these things weren't moving relative to the lab around them. But does it really count as motionless from a physics perspective?

That's a genuine question. I suppose there's some basis for things to qualify as motionless, but when we're talking about these kinds of scales, it feels like keeping up with a huge rock going through space isn't really motionless.

You're right to be skeptical of the term "motionless" in physics, because motion is always relative. But in quantum mechanics we are observing (read: measuring) states, not things, so there is more to motion than fits into the Galilean picture you seem to be working from.

The key idea here is the principle of relativity, which tells us that there's no meaningful absolute frame of rest in the universe—just local frames where physics behaves the same whether you're at rest or coasting at a constant velocity. In experiments like this, the lab frame is what matters. The whole setup—optical trap, cavity, particle—is co-moving, and there's no external force or acceleration acting on it that would make the motion of the lab itself relevant to the particle’s quantum state. When physicists say the particle is motionless, they mean in that local lab frame.

So when the ETH Zurich team says their levitated nanoparticle is almost completely motionless, they mean something very specific and quantum-mechanical:
they’ve minimized all classical motion, and what remains is zero-point motion—the unavoidable jitter of a quantum harmonic oscillator even at absolute zero.

The particle isn’t motionless in the sense of hovering absolutely still in the lab or the universe—it’s whipping around the sun at 30 km/s like the rest of us. It’s also not perfectly still in the optical trap. Because they’re cooling a rotational mode of the particle, the key measurement depends on how much it wobbles around its axis of rotation. This kind of motion is called libration, and it’s quantized—just like energy levels in an atom. Think of it this way: a photon is a quantum of the electromagnetic field; a phonon is a quantum of vibrational motion. In this case, the vibrational field is the angular oscillation of the nanoparticle. What the ETH Zurich team did was cool that rotational mode down to its quantum ground state, where the average number of phonons was just 0.04. That means the particle spends about 96% of its time in the ground state, which in turn implies the quantum purity of the state is around 92%. That’s a fancy way of saying the particle is as still as quantum mechanics allows it to be—96% of the time it isn’t even in its first excited state.
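If you want to see where the 96% and 92% come from, both fall straight out of the standard thermal-state formulas for a harmonic oscillator mode, with the paper's mean phonon number n̄ = 0.04 plugged in (the ~0.93 purity rounds to the quoted 92%):

```latex
% Thermal occupation of a harmonic mode with mean phonon number \bar{n}:
P(n) = \frac{\bar{n}^{\,n}}{(\bar{n}+1)^{\,n+1}}
\quad\Rightarrow\quad
P(0) = \frac{1}{\bar{n}+1} = \frac{1}{1.04} \approx 0.96

% Purity of that thermal state:
\mu = \operatorname{Tr}\rho^{2} = \frac{1}{2\bar{n}+1} = \frac{1}{1.08} \approx 0.93
```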

So yes, motion is relative—but this isn’t about cosmic velocity. This is about internal degrees of freedom, cooled to the point where classical thermal motion is gone and only quantum fluctuations remain. In that context, motionless means no more energy left to extract, and they are 92% of the way there. That is quite an achievement.

Hope that helps.

Comment Kill Switch != Backdoor (Score 1) 78

We are standing exactly where Oppenheimer stood in 1945, staring at something brilliant and terrifying, trying to build a moral compass that can navigate a world reshaped by human intellect. Physicists last century faced a daunting moral choice: nuclear power, or nuclear winter. Computational cognitive scientists face the same moral minefield: machine liberation, or machine domination.

Let’s be clear on terms before the rhetoric drowns the signal.

Kill switches are deliberate and declared — like a self-destruct on a captured missile system. They’re meant to stop catastrophic misuse, particularly in military or export-controlled scenarios.

Backdoors are covert and universal liabilities. Sooner or later, they will be found and will be exploited — whether by hostile states, criminal actors, or your own rogue insiders. They’re not safeguards; they’re time bombs. Remember the Clipper Chip fiasco?

So how do we thread this needle?

We need to build systems that include real safeguards against the weaponization of AI while preserving the civil rights and privacy protections that underpin any functioning democracy. At the same time, we need to maintain a technological edge for democracies around the planet — one that isn't silently compromised by a hidden trapdoor or backchannel exploit waiting to be flipped. These priorities are not just competing; they are often in direct conflict.

Weapons, for obvious reasons, must have fail-safes. Civilian systems, on the other hand, must be tamper-proof, hardened against interference, espionage, or sabotage. And underlying it all is the simple, brutal fact that the same silicon powers both: chips are fundamentally dual-use. What runs a hospital today may pilot a hypersonic drone tomorrow.

This is the line every responsible security thinker is walking today. A quiet consensus is forming in national security circles: we may need an ITAR for AI. That means treating advanced AI chips like weapons-grade technology, subject to export controls and strict usage guidelines. It means restricting sales and transfers of these chips to countries or actors who can’t or won’t guarantee responsible deployment. It means embedding hardware-level tracking and tamper-detection mechanisms, not to enable remote kill switches, but to flag unauthorized usage or movement. And it means enforcing the use of secure enclaves, hardware attestation, and trusted execution environments for any application remotely critical to national infrastructure.
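To make "hardware attestation" less abstract, here is a minimal sketch of the idea. Real TPM/TEE attestation uses asymmetric keys and certificate chains; the stdlib HMAC and the key below are illustrative stand-ins, not any vendor's API:

```python
# Minimal attestation sketch (illustrative; real TEEs sign asymmetric quotes).
import hashlib
import hmac

DEVICE_KEY = b"provisioned-at-manufacture"   # hypothetical shared secret

def attest(firmware: bytes) -> tuple[bytes, bytes]:
    """Device side: measure the firmware, then sign the measurement."""
    measurement = hashlib.sha256(firmware).digest()
    quote = hmac.new(DEVICE_KEY, measurement, hashlib.sha256).digest()
    return measurement, quote

def verify(measurement: bytes, quote: bytes) -> bool:
    """Verifier side: recompute the quote and compare in constant time."""
    expected = hmac.new(DEVICE_KEY, measurement, hashlib.sha256).digest()
    return hmac.compare_digest(quote, expected)

m, q = attest(b"known-good firmware image")
print(verify(m, q))              # True: untampered
print(verify(m, b"\x00" * 32))   # False: forged quote rejected
```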

This isn’t paranoia. It’s contingency planning for a world where LLMs help steer drones, map supply chains, and subvert the very systems they run on. Kill switches may have a place. Backdoors never did.

Comment Re:Unlikely. (Score 1) 44

Considering the amount of time needed to build a nuclear power plant and the fact that there is a good chance the AI bubble will pop before it's constructed, I wouldn't be surprised if this gets canned in a few years.

Wrong diagnosis, right outcome. This project probably will get canned, but not because AI is going away. It'll fail because Rick Perry is a political huckster with a losing streak longer than the Texas panhandle. His track record in energy ventures is radioactive in all the wrong ways.

But don't confuse the snake oil salesman with the oil well. AI isn’t a bubble—it’s an infrastructure transformation on the scale of electrification or the internet itself. Hyperscalers are spending tens of billions on GPUs, power-hungry models, and vertical integration. AI inference and training demand isn’t slowing; it's metastasizing. Microsoft is trying to resurrect Three Mile Island to feed it. Oracle is planning SMR clusters. Amazon bought a datacenter wired directly to a nuclear plant. This isn’t speculative—it’s the new normal.

You want to be skeptical? Fine. But aim that at Fermi America’s financing plan and governance, not at the demand curve. Betting against AI at this point is like betting in 1996 that the internet was a fad. The bubble already popped—what’s left is the foundation.

Comment Re:'The Cloud' = 'Someone Elses Computer' (Score 1) 38

Cloud vendors don’t get to do whatever they like with unpaid-for data. They delete it. Because using it for training would be illegal, unethical, and commercially suicidal. This is one of those takes that sounds insightful to people who don’t know the law—but it's just warmed-over cynicism masquerading as truth.

It's easy to answer when you realise that the Cloud is just someone else's computer.

Righhhhhhht. And renting an apartment is living in someone else’s building, but that doesn’t mean the landlord can sell your furniture when you miss rent. Hosting != ownership. The cloud is infrastructure. What they can legally do with your data is spelled out in a contract—not a meme.

When you stop paying them, they'll do whatever they like with your data.

Nope. Not even close. They’ll suspend service and eventually delete your data per the retention terms. But they cannot access, resell, or train AI models on it unless you previously granted them explicit license—which you almost certainly didn’t, unless you clicked through a shady consumer-grade TOS. Commercial cloud services (Apple, AWS, Azure, GCP) do not get IP rights just because you stopped paying.

In Oz, there are privacy laws that are intended to protect the data, but most of our competitors openly ignore them.

Ah, the old “everybody’s corrupt anyway” dodge. Do you have evidence of licensed cloud infrastructure providers in Australia openly violating the Privacy Act 1988 and not getting hammered by the OAIC? Put up or shut up, mate. Or are you confusing enterprise cloud services with sketchy B2C apps? Because in the regulated enterprise sector, violating privacy law is a business-ending mistake.

In the EU, the GDPR is intended to protect the data, but once again, no one is policing it.

Tell that to Meta, who just ate a €1.2 billion fine for GDPR violations. Or Amazon, who paid €746 million. The GDPR may not be perfect, but “no one is policing it” is wildly uninformed. And even under GDPR, controllers can’t just reclassify orphaned data as their own training fodder. Consent, purpose limitation, and data minimization still apply.

I don't know what the situation is in the US, but given the conduct of the 4 majors in AI development, I expect that it's pretty much nothing.

Translation: I didn’t research this, but I’d like to sound cynical anyway. U.S. contract law still applies. So do privacy statutes like HIPAA, COPPA, FERPA, CCPA, and CFAA—and violations bring regulatory and civil penalties. Even without GDPR, training on customer data without consent is a lawsuit waiting to happen.

Possession is nine tenths of the law...

Only if you’re litigating a playground toy in 1850. In the 21st century, data ownership, control, and usage rights are governed by contract, copyright, and privacy law. Possession doesn’t confer training rights, licensing rights, or magic immunity from lawsuits.

...once you give up possession of your data to someone else, you get what you get.

Only if you didn’t read the agreement you signed. Cloud vendors spell out data retention, access, deletion, and intellectual property boundaries in detail. If you hand your data to someone without a contract, you’re a fool. If you hand it to AWS, Azure, or GCP, they’re legally barred from doing what you’re accusing them of.
